
    Context-Aware Recursive Bayesian Graph Traversal in BCIs

    Noninvasive brain–computer interfaces (BCIs), and more specifically electroencephalography (EEG) based systems for intent detection, must compensate for the low signal-to-noise ratio of EEG signals. In many applications, temporal dependency information from consecutive decisions and contextual data can be used to provide a prior probability for the upcoming decision. In this study we propose two probabilistic graphical models (PGMs) that use context information and previously observed EEG evidence to estimate a probability distribution over the decision space in a graph-based decision-making mechanism. In this approach, the user moves a pointer to the desired vertex in a graph in which each vertex represents an action. To select a vertex, either an explicit Select command or a proposed probabilistic selection criterion (PSC) can be used to automatically detect the user's intended vertex. The performance of different combinations of PGMs and selection criteria is compared on a keyboard with a graph layout. Simulation results show that the probabilistic selection criterion combined with the probabilistic graphical model provides the largest performance boost for individuals with poor calibration performance, while matching performance for individuals with high calibration performance.
    Comment: This work has been submitted to EMBC 201
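The recursive estimate this abstract describes can be sketched as a simple Bayesian update: a context prior over the graph's vertices is multiplied by the likelihood of each new piece of EEG evidence and renormalized, and a probabilistic selection criterion fires once the belief is confident enough. The vertex set, prior, likelihood values, and threshold below are hypothetical illustrations, not numbers from the paper:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One recursive Bayesian step: combine the current belief over
    graph vertices with the likelihood of the latest EEG evidence."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical 4-vertex action graph with a context prior.
belief = np.array([0.40, 0.30, 0.20, 0.10])

# Likelihoods p(EEG evidence | vertex) for two consecutive
# observations (illustrative numbers only).
for likelihood in [np.array([0.2, 0.5, 0.2, 0.1]),
                   np.array([0.1, 0.7, 0.1, 0.1])]:
    belief = bayes_update(belief, likelihood)

# A probabilistic selection criterion might select a vertex once its
# posterior clears a confidence threshold (0.8 is an assumed value).
if belief.max() > 0.8:
    print("select vertex", belief.argmax())
```

The same loop extends to any graph size; only the prior (from context) and the likelihoods (from the EEG classifier) change.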

    Decoding Complex Imagery Hand Gestures

    Brain–computer interfaces (BCIs) offer individuals with major disabilities an alternative way to interact with their environment. Sensorimotor rhythm (SMR) based BCIs can successfully perform control tasks; however, traditional SMR paradigms decouple the control signal from the real task, making them non-ideal for complex control scenarios. In this study, we design a new, intuitively connected motor imagery (MI) paradigm that uses hierarchical common spatial patterns (HCSP) and context information to effectively predict intended hand grasps from electroencephalogram (EEG) data. Experiments with 5 participants yielded an aggregate classification accuracy (intended-grasp prediction probability) of 64.5% for 8 different hand gestures, more than 5 times the chance level.
    Comment: This work has been submitted to EMBC 201
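Common spatial patterns, the building block of the hierarchical approach above, are usually computed as a generalized eigendecomposition of the two classes' average spatial covariance matrices; trials are then projected through the extreme-eigenvalue filters and summarized by log-variance features. A rough sketch on synthetic data (channel counts, trial shapes, and the random signals are illustrative, not the paper's setup):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """CSP spatial filters from two classes of EEG trials, each
    shaped (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w.
    vals, vecs = eigh(ca, ca + cb)
    # Filters with extreme eigenvalues discriminate the classes best.
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, pick].T          # (2 * n_filters, n_channels)

def log_var_features(trial, filters):
    """Log-variance of the spatially filtered signals, the usual
    CSP feature vector fed to a classifier."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 128))   # 20 trials, 8 channels
b = rng.standard_normal((20, 8, 128))
W = csp_filters(a, b)
print(log_var_features(a[0], W).shape)
```

A hierarchical variant, as in HCSP, would train such binary filter banks at each level of a gesture hierarchy rather than one flat multiclass model.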

    Code-VEP vs. Eye Tracking: A Comparison Study

    Even with state-of-the-art techniques, there are individuals whose paralysis prevents them from communicating with others. Brain–computer interfaces (BCIs) aim to utilize brain waves to construct a voice for those whose needs remain unmet. In this paper we compare the efficacy of a BCI input signal, code-VEP measured via electroencephalography, against eye gaze tracking, one of the most popular modalities in use. Our results, obtained with healthy individuals without paralysis, suggest that while eye tracking works well for some, it works poorly or not at all for others; the latter group includes individuals with corrected vision or those who unintentionally squint while focusing on a task. The performance of the interface is also more sensitive to head/body movements when eye tracking is the input modality than when c-VEP is used. Sensitivity to head/body movement may be lower in eye-tracking systems that track the head or are mounted on the face and are designed specifically as assistive devices. The sample interface developed for this assessment has the same reaction time whether driven by c-VEP or by eye tracking: approximately 0.5–1 second is needed to make a selection among the four simultaneously presented options. Factors such as system reaction time and robustness play a crucial role in participant preferences.
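The throughput implied by those numbers can be put in familiar BCI terms with the standard Wolpaw information-transfer-rate formula. Only the four options and the 0.5–1 s selection time come from the text above; the 90% accuracy figure is an assumed value for illustration:

```python
import math

def itr_bits_per_selection(n, p):
    """Wolpaw ITR: bits conveyed by one selection among n options
    made with accuracy p."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

n_options = 4      # four simultaneously presented options
accuracy = 0.90    # assumed, not reported in the study
for seconds in (0.5, 1.0):
    bits = itr_bits_per_selection(n_options, accuracy)
    print(f"{seconds} s/selection -> {bits * 60 / seconds:.1f} bits/min")
```

The spread between the 0.5 s and 1.0 s cases shows why reaction time matters as much as raw accuracy when users weigh the two modalities.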

    Recursive Bayesian coding for BCIs

    Brain–computer interfaces (BCIs) seek to infer a task symbol, a task-relevant instruction, from brain symbols, classifiable physiological states. For example, in a motor imagery robot control task a user would indicate a choice from a dictionary of task symbols (rotate arm left, grasp, etc.) by selecting from a smaller dictionary of brain symbols (imagined left or right hand movements). We examine how a BCI infers a task symbol from selections of brain symbols. We offer a recursive Bayesian decision framework that incorporates context prior distributions (e.g. language-model priors in spelling applications), accounts for varying brain-symbol accuracy, and is robust to single brain-symbol query errors. This framework is paired with Maximum Mutual Information (MMI) coding, which maximizes a generalization of the information transfer rate (ITR). Both are applicable to any discrete task and brain phenomenon (e.g. P300, SSVEP, MI). To demonstrate the efficacy of our approach we perform SSVEP “Shuffle” Speller experiments and compare our recursive coding scheme with traditional decision-tree methods, including Huffman coding. MMI coding leverages the asymmetry of the classifier’s mistakes across a particular user’s SSVEP responses; in doing so it offers a 33% increase in letter accuracy, though it is 13% slower in our experiment.
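The core of such a framework can be sketched as a recursive posterior update over task symbols: each task symbol is assigned a codeword of brain symbols, and every observed (possibly misclassified) brain symbol reweights the belief through the classifier's confusion matrix, starting from a language-model prior. The dictionary, codewords, prior, and confusion matrix below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical 3-letter task dictionary with a language-model prior.
letters = ["A", "B", "C"]
prior = np.array([0.5, 0.3, 0.2])

# Codewords: the brain symbol (0 or 1) each letter asks the user to
# produce at each query step.
codewords = {"A": [0, 0], "B": [0, 1], "C": [1, 1]}

# Asymmetric confusion matrix: conf[i, j] = p(classifier outputs j |
# user produced brain symbol i). MMI coding exploits this asymmetry.
conf = np.array([[0.9, 0.1],
                 [0.3, 0.7]])

def posterior_after(observations, prior):
    """Recursive Bayesian update: fold each observed brain symbol
    into the belief over task symbols and renormalize."""
    post = prior.copy()
    for step, obs in enumerate(observations):
        lik = np.array([conf[codewords[l][step], obs] for l in letters])
        post = post * lik
        post /= post.sum()
    return post

# Because the update is soft, a single misclassified query shifts the
# posterior without irrevocably eliminating the intended letter.
print(dict(zip(letters, posterior_after([0, 1], prior).round(3))))
```

A decision-tree scheme such as Huffman coding would instead commit to a branch after each query, which is why a single query error there can derail the whole selection.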